Current seismic design codes rely primarily on the strength and displacement capacity of structural members and do not account for the effects of ground motion duration or hysteretic behavior characteristics. Energy-based approaches serve as supplemental response indices that include the effect of repeated loading on seismic performance. The underlying design philosophy is that the energy dissipation capacity of structural members should satisfy the seismic demand; therefore, the energy dissipation behavior of structural members must be well understood to achieve an effective energy-based design approach. This study focuses on the energy dissipation capacity of reinforced concrete (RC) shear walls, which are widely used in high-seismicity regions because they provide significant lateral stiffness and strength. A machine-learning-based predictive model, using Gaussian Process Regression (GPR), is developed for the energy dissipation capacity of shear walls as a function of wall design parameters. Eighteen design parameters are shown to influence energy dissipation, and the most important ones are identified by applying sequential backward elimination as a feature-selection method to reduce the complexity of the predictive model. The proposed model provides robust and accurate predictions on new data, with a prediction accuracy (ratio of predicted to actual values) of about 1.00 and a coefficient of determination (R2) of 0.93. The outcomes of this study are expected to contribute to energy-based design approaches by (i) identifying the wall properties that most influence the seismic energy dissipation capacity of shear walls and (ii) providing a predictive model that enables the comparison of different wall design configurations to achieve higher energy dissipation capacity.
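A minimal sketch of the kind of pipeline described above, using scikit-learn's GaussianProcessRegressor together with a simple sequential backward elimination loop. The synthetic data, feature names, kernel choice, and stopping criterion are illustrative assumptions, not the authors' actual configuration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import cross_val_score

def backward_elimination(X, y, feature_names, min_features=5):
    """Greedily drop the feature whose removal hurts cross-validated R^2 the least."""
    kept = list(range(X.shape[1]))
    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    while len(kept) > min_features:
        scores = []
        for f in kept:
            trial = [i for i in kept if i != f]
            r2 = cross_val_score(gpr, X[:, trial], y, cv=5, scoring="r2").mean()
            scores.append((r2, f))
        best_r2, dropped = max(scores)   # feature whose removal keeps R^2 highest
        kept.remove(dropped)
        print(f"dropped {feature_names[dropped]}, CV R^2 = {best_r2:.3f}")
    return kept

# Hypothetical dataset: 18 wall design parameters per specimen, target = dissipated energy.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 18))
y = 2.0 * X[:, 0] + X[:, 3] - 0.5 * X[:, 7] + rng.normal(scale=0.1, size=200)
selected = backward_elimination(X, y, [f"param_{i}" for i in range(18)])
```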
With the growing global deployment of carbon capture and sequestration technology to combat climate change, monitoring and detection of potential CO2 leakage through existing or storage induced faults are critical to the safe and long-term viability of the technology. Recent work on time-lapse seismic monitoring of CO2 storage has shown promising results in its ability to monitor the growth of the CO2 plume from surface recorded seismic data. However, due to the low sensitivity of seismic imaging to CO2 concentration, additional developments are required to efficiently interpret the seismic images for leakage. In this work, we introduce a binary classification of time-lapse seismic images to delineate CO2 plumes (leakage) using state-of-the-art deep learning models. Additionally, we localize the leakage region of CO2 plumes by leveraging Class Activation Mapping methods.
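A hedged sketch of the classification-plus-localization idea: a CNN is trained as a binary plume/no-plume classifier on time-lapse seismic images, and Grad-CAM (one common Class Activation Mapping method) highlights the image regions driving the decision. The choice of ResNet-18 and the single-channel input handling are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Binary plume / no-plume classifier over time-lapse seismic difference images (assumed 1-channel).
model = resnet18(weights=None)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 2)

activations, gradients = {}, {}
def fwd_hook(_, __, out): activations["feat"] = out
def bwd_hook(_, __, grad_out): gradients["feat"] = grad_out[0]
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(image):
    """Return a coarse heatmap of the regions supporting the 'leakage' class."""
    model.eval()
    logits = model(image)                        # image: (1, 1, H, W)
    logits[0, 1].backward()                      # gradient of the leakage logit
    weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)   # channel importance
    cam = torch.relu((weights * activations["feat"]).sum(dim=1))
    return cam / (cam.max() + 1e-8)

heatmap = grad_cam(torch.randn(1, 1, 224, 224))
```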
For best performance, today's semantic segmentation methods use large and carefully labeled datasets, requiring expensive annotation budgets. In this work, we show that coarse annotation is a low-cost but highly effective alternative for training semantic segmentation models. Considering the urban scene segmentation scenario, we leverage cheap coarse annotations for real-world captured data, as well as synthetic data, to train our model and show competitive performance compared with finely annotated real-world data. Specifically, we propose a coarse-to-fine self-training framework that generates pseudo labels for unlabeled regions of the coarsely annotated data, using synthetic data to improve predictions around the boundaries between semantic classes, and using cross-domain data augmentation to increase diversity. Our extensive experimental results on the Cityscapes and BDD100k datasets demonstrate that our method achieves a significantly better performance-versus-annotation-cost tradeoff, yielding performance comparable to fully annotated data with only a small fraction of the annotation budget. Also, when used as pretraining, our framework performs better compared to the standard fully supervised setting.
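A minimal sketch of the pseudo-labeling step described above: pixels left unlabeled by the coarse annotation receive the teacher's prediction, but only where the teacher is sufficiently confident. The ignore index, confidence threshold, and model interface are assumptions for illustration, not the paper's exact procedure.

```python
import torch

IGNORE = 255          # assumed ignore index for unlabeled pixels in the coarse masks
CONF_THRESHOLD = 0.9  # assumed confidence cutoff for accepting a pseudo label

@torch.no_grad()
def fill_pseudo_labels(teacher, image, coarse_mask):
    """Replace ignored pixels of a coarse mask with confident teacher predictions."""
    probs = torch.softmax(teacher(image), dim=1)     # (N, C, H, W)
    confidence, prediction = probs.max(dim=1)        # (N, H, W)
    refined = coarse_mask.clone()
    accept = (coarse_mask == IGNORE) & (confidence >= CONF_THRESHOLD)
    refined[accept] = prediction[accept]
    return refined                                    # still IGNORE where the teacher is unsure
```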
Modern robotic systems are required to operate in challenging environments, which demand reliable localization under adverse conditions. LiDAR-based localization methods, such as the Iterative Closest Point (ICP) algorithm, can suffer in geometrically uninformative environments that are known to deteriorate registration performance and push the optimization toward divergence along weakly constrained directions. To overcome this issue, this work proposes i) a robust multi-category (non-)localizability detection module and ii) a localizability-aware constrained ICP optimization module, and couples both in a unified manner. The proposed localizability detection uses the correspondences between the scan and the map to analyze the alignment strength along the principal directions of the optimization as part of its multi-category LiDAR localizability analysis. In the second part, this localizability analysis is tightly integrated into the scan-to-map point cloud registration to generate drift-free pose updates along well-constrained directions. The proposed method is thoroughly evaluated and compared to state-of-the-art methods in simulation and real-world experiments, underlining the gain in performance and reliability in LiDAR-challenging scenarios. In all experiments, the proposed framework demonstrates accurate and generalizable localizability detection and robust pose estimation without environment-specific parameter tuning.
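A rough numpy sketch of the core localizability idea: the point-to-plane contributions of scan-to-map correspondences are accumulated into a 6x6 information matrix, and its eigen-decomposition exposes principal directions with little constraint. The single threshold and the example scene are illustrative assumptions, not the paper's actual multi-category detection module.

```python
import numpy as np

def localizability(points, normals, strength_threshold=100.0):
    """Analyze how well correspondences constrain the 6-DoF pose.

    points  : (N, 3) scan points with map correspondences
    normals : (N, 3) surface normals of the matched map points
    """
    # Point-to-plane Jacobian row for each correspondence: [p x n, n]
    J = np.hstack([np.cross(points, normals), normals])   # (N, 6)
    H = J.T @ J                                            # 6x6 Gauss-Newton information matrix
    eigvals, eigvecs = np.linalg.eigh(H)
    weak = eigvals < strength_threshold                    # weakly constrained principal directions
    return eigvals, eigvecs, weak

# Degenerate example: a single flat plane constrains translation only along its normal.
pts = np.random.rand(500, 3) * [10, 10, 0]
nrm = np.tile([0.0, 0.0, 1.0], (500, 1))
vals, vecs, weak_dirs = localizability(pts, nrm)
```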
A new development in NLP is the construction of hyperbolic word embeddings. As opposed to their Euclidean counterparts, hyperbolic embeddings are represented not by vectors, but by points in hyperbolic space. This makes the most common basic scheme for constructing document representations, namely the averaging of word vectors, meaningless in the hyperbolic setting. We reinterpret the vector mean as the centroid of the points represented by the vectors, and investigate various hyperbolic centroid schemes and their effectiveness at text classification.
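One concrete hyperbolic centroid that fits this description is the Einstein midpoint, computed in the Klein model. The sketch below assumes the word embeddings live in the Poincaré ball and converts to the Klein model and back; it illustrates the idea rather than the specific scheme the paper settles on.

```python
import numpy as np

def poincare_to_klein(x):
    return 2.0 * x / (1.0 + np.sum(x * x, axis=-1, keepdims=True))

def klein_to_poincare(x):
    return x / (1.0 + np.sqrt(np.clip(1.0 - np.sum(x * x, axis=-1, keepdims=True), 0.0, 1.0)))

def einstein_midpoint(points_poincare):
    """Centroid of hyperbolic word embeddings (Poincare ball) via the Einstein midpoint."""
    k = poincare_to_klein(points_poincare)                   # (N, d) Klein-model coordinates
    gamma = 1.0 / np.sqrt(np.clip(1.0 - np.sum(k * k, axis=-1, keepdims=True), 1e-12, None))
    midpoint_klein = (gamma * k).sum(axis=0) / gamma.sum()   # Lorentz-factor-weighted average
    return klein_to_poincare(midpoint_klein)

# Example: average three word embeddings into one document representation.
words = np.array([[0.1, 0.2], [0.3, -0.1], [-0.2, 0.05]])
doc_vector = einstein_midpoint(words)
```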
Large pre-trained, zero-shot capable models have shown considerable success both for standard transfer and adaptation tasks, with particular robustness towards distribution shifts. In addition, subsequent fine-tuning can considerably improve performance on a selected downstream task. However, through naive fine-tuning, these zero-shot models lose their generalizability and robustness towards distribution shifts. This is a particular problem for tasks such as Continual Learning (CL), where continuous adaptation has to be performed as new task distributions are introduced sequentially. In this work, we showcase that where fine-tuning falls short in adapting such zero-shot capable models, simple momentum-based weight interpolation can provide consistent improvements for CL tasks in both memory-free and memory-based settings. In particular, we find improvements of over $+4\%$ on standard CL benchmarks, while in parts more than halving the gap to the upper bound of jointly training on all tasks at once, allowing the continual learner to inch closer to the joint-training limit.
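A minimal sketch of momentum-based weight interpolation in PyTorch: a slow copy of the model's weights is updated as an exponential moving average of the continually fine-tuned weights. The momentum value and the update granularity (per task or per step) are assumptions for illustration.

```python
import copy
import torch

@torch.no_grad()
def momentum_interpolate(slow_model, fast_model, momentum=0.99):
    """slow <- momentum * slow + (1 - momentum) * fast, parameter by parameter."""
    for p_slow, p_fast in zip(slow_model.parameters(), fast_model.parameters()):
        p_slow.mul_(momentum).add_(p_fast, alpha=1.0 - momentum)

# Usage sketch: keep an interpolated copy alongside the continually fine-tuned model.
fast = torch.nn.Linear(512, 10)    # stands in for the fine-tuned zero-shot model
slow = copy.deepcopy(fast)         # retains more of the original robustness
# ... fine-tune `fast` on the current task ...
momentum_interpolate(slow, fast)
```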
A grand goal in deep learning research is to learn representations capable of generalizing across distribution shifts. Disentanglement is one promising direction, aimed at aligning a model's representations with the underlying factors that generate the data (e.g. color or background). Existing disentanglement methods, however, rely on an often unrealistic assumption: that factors are statistically independent. In reality, factors (like object color and shape) are correlated. To address this limitation, we propose a relaxed disentanglement criterion, the Hausdorff Factorized Support (HFS) criterion, that encourages a factorized support, rather than a factorial distribution, by minimizing a Hausdorff distance. This allows for arbitrary distributions of the factors over their support, including correlations between them. We show that the use of HFS consistently facilitates disentanglement and recovery of ground-truth factors across a variety of correlation settings and benchmarks, even under severe training correlations and correlation shifts, with in parts over +60% relative improvement over existing disentanglement methods. In addition, we find that leveraging HFS for representation learning can even facilitate transfer to downstream tasks such as classification under distribution shifts. We hope our original approach and positive empirical results inspire further progress on the open problem of robust generalization.
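A rough sketch of the idea behind a factorized-support criterion: within a minibatch, the set of observed latent codes should cover the Cartesian product of the per-dimension supports, which can be encouraged by penalizing a directed Hausdorff distance between the two point sets. This is a simplified two-dimensional, batch-estimate illustration, not the paper's exact formulation.

```python
import torch

def directed_hausdorff(A, B):
    """max over a in A of (min over b in B of ||a - b||), for point sets A: (n, d), B: (m, d)."""
    d = torch.cdist(A, B)              # (n, m) pairwise distances
    return d.min(dim=1).values.max()

def hfs_penalty(z_i, z_j):
    """Encourage factorized support for one pair of latent dimensions.

    z_i, z_j : (batch,) latent codes for two dimensions.
    """
    observed = torch.stack([z_i, z_j], dim=1)                    # codes actually produced
    grid_i, grid_j = torch.meshgrid(z_i, z_j, indexing="ij")     # product of the marginal supports
    product = torch.stack([grid_i.reshape(-1), grid_j.reshape(-1)], dim=1)
    # Penalize product-support points that lie far from every observed code.
    return directed_hausdorff(product, observed)

penalty = hfs_penalty(torch.randn(64), torch.randn(64))
```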
Semantic image synthesis enables control over what is being generated, in contrast to unconditional image generation. We synthesize images conditionally from the latent space of a pre-trained vector-quantized autoencoder (VQ model). We find that jointly learning the conditioning and image latents significantly improves the modeling capability of the transformer, compared to training an autoregressive transformer on separately learned conditioning latents and image latents. Although our jointly trained VQ model achieves similar reconstruction performance on both the semantic and image latents, tying the two modalities together in the autoencoding stage proves to be an important component for improving autoregressive modeling performance. We show that our model improves semantic image synthesis with autoregressive models on the popular semantic image datasets ADE20K, Cityscapes, and COCO-Stuff.
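A hedged sketch of the conditional autoregressive setup described above: semantic-map tokens and image tokens from a VQ model are concatenated into one sequence, and a decoder-style transformer predicts the image tokens given the semantic prefix. Vocabulary sizes, sequence lengths, the shared token vocabulary, and all hyperparameters are placeholder assumptions.

```python
import torch
import torch.nn as nn

class ConditionalTokenTransformer(nn.Module):
    """Autoregressive model over [semantic tokens ; image tokens]."""
    def __init__(self, vocab_size=1024, d_model=512, n_layers=8, n_heads=8, max_len=512):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, semantic_tokens, image_tokens):
        seq = torch.cat([semantic_tokens, image_tokens], dim=1)          # (B, S + I)
        x = self.tok(seq) + self.pos(torch.arange(seq.size(1), device=seq.device))
        causal = nn.Transformer.generate_square_subsequent_mask(seq.size(1)).to(seq.device)
        h = self.blocks(x, mask=causal)
        logits = self.head(h)
        # Loss only on the image tokens, conditioned on the semantic prefix.
        S = semantic_tokens.size(1)
        return nn.functional.cross_entropy(
            logits[:, S - 1 : -1].reshape(-1, logits.size(-1)),
            image_tokens.reshape(-1),
        )

loss = ConditionalTokenTransformer()(torch.randint(0, 1024, (2, 64)), torch.randint(0, 1024, (2, 256)))
```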
State-of-the-art deep learning models are often trained on large amounts of costly labeled data. However, the requirement for exhaustive manual annotation can limit a model's generalizability in the limited-label regime. Semi-supervised learning and unsupervised learning offer promising paradigms for learning from an abundance of unlabeled visual data. Recent progress in these paradigms has demonstrated the strong benefits of leveraging unlabeled data to improve model generalization and provide better model initialization. In this survey, we review recent advanced deep learning algorithms for semi-supervised learning (SSL) and unsupervised learning (UL) from a unified perspective. To provide a holistic understanding of the state of the art in these areas, we propose a unified taxonomy. We categorize existing representative SSL and UL methods with comprehensive and insightful analysis, highlighting their design rationales across different learning scenarios and applications in different computer vision tasks. Lastly, we discuss emerging trends and open challenges in SSL and UL to shed light on critical future research directions.
Humans show advanced abstraction capabilities in games that require quickly communicating object information. They decompose the message content into multiple parts and convey them in an interpretable protocol. Toward equipping machines with such capabilities, we propose the Primitive-based Sketch Abstraction task, where the goal is to represent a sketch using a fixed set of drawing primitives under a budget. To solve this task, our Primitive Matching Network (PMN) learns an interpretable abstraction of a sketch in a self-supervised manner. Specifically, PMN maps each stroke of a sketch to its most similar primitive in a given set, predicting an affine transformation that aligns the selected primitive to the target stroke. We learn this stroke-to-primitive mapping end-to-end with a distance-transform loss that is minimal when the original sketch is precisely reconstructed with the predicted primitives. Our PMN abstraction empirically achieves the highest performance on sketch recognition and sketch-based image retrieval, while also being highly interpretable. This opens up new possibilities for sketch analysis, such as comparing sketches by extracting the most relevant primitives that define an object category. Code is available at https://github.com/explainableml/sketch-primitives.
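A simplified sketch of the stroke-to-primitive matching idea: each stroke (a 2D polyline) is compared against a small primitive set after fitting an affine transform by least squares, and the best-matching primitive is kept. The arc-length resampling, least-squares fit, and pointwise matching error are stand-ins for PMN's learned mapping and distance-transform loss.

```python
import numpy as np

def resample(points, n=32):
    """Resample a polyline to n evenly spaced points along its arc length."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    u = np.linspace(0.0, t[-1], n)
    return np.stack([np.interp(u, t, points[:, 0]), np.interp(u, t, points[:, 1])], axis=1)

def fit_affine(primitive, stroke):
    """Least-squares affine transform mapping primitive points onto stroke points."""
    P = np.hstack([primitive, np.ones((len(primitive), 1))])   # (n, 3) homogeneous coordinates
    A, *_ = np.linalg.lstsq(P, stroke, rcond=None)             # (3, 2) affine parameters
    return P @ A                                                # transformed primitive

def match_stroke(stroke, primitives):
    """Pick the primitive whose affinely transformed version best reconstructs the stroke."""
    s = resample(stroke)
    errors = [np.mean(np.linalg.norm(fit_affine(resample(p), s) - s, axis=1)) for p in primitives]
    return int(np.argmin(errors))

line = np.array([[0.0, 0.0], [1.0, 1.0]])
arc = np.array([[0.0, 0.0], [0.5, 0.6], [1.0, 0.0]])
best = match_stroke(np.array([[0.0, 0.0], [2.0, 2.1]]), [line, arc])
```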